Neural Word Embedding as Implicit Matrix Factorization
Authors
Omer Levy, Yoav Goldberg
Abstract
We analyze skip-gram with negative-sampling (SGNS), a word embedding method introduced by Mikolov et al., and show that it is implicitly factorizing a word-context matrix, whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, shifted by a global constant. We find that another embedding method, NCE, is implicitly factorizing a similar matrix, where each cell is the (shifted) log conditional probability of a word given its context. We show that using a sparse Shifted Positive PMI word-context matrix to represent words improves results on two word similarity tasks and one of two analogy tasks. When dense low-dimensional vectors are preferred, exact factorization with SVD can achieve solutions that are at least as good as SGNS’s solutions for word similarity tasks. On analogy questions SGNS remains superior to SVD. We conjecture that this stems from the weighted nature of SGNS’s factorization.
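The shifted-PMI view described above can be made concrete with a small numerical sketch. The following is a minimal numpy illustration, not the authors' code: it builds a Shifted Positive PMI (SPPMI) matrix from toy word-context counts (the count values are made up for illustration) and factorizes it with truncated SVD to obtain dense word vectors, mirroring the SVD alternative discussed in the abstract.

```python
import numpy as np

# Toy word-context co-occurrence counts (rows: words, cols: contexts).
# Hypothetical values for illustration; in practice these are corpus statistics.
counts = np.array([
    [10, 2, 0, 1],
    [2, 8, 1, 0],
    [0, 1, 6, 3],
    [1, 0, 3, 9],
], dtype=float)

total = counts.sum()
p_wc = counts / total                              # joint P(w, c)
p_w = counts.sum(axis=1, keepdims=True) / total    # marginal P(w)
p_c = counts.sum(axis=0, keepdims=True) / total    # marginal P(c)

# PMI(w, c) = log( P(w, c) / (P(w) P(c)) ); zero-count cells give -inf
with np.errstate(divide="ignore"):
    pmi = np.log(p_wc / (p_w * p_c))

# Shifted Positive PMI: SPPMI(w, c) = max(PMI(w, c) - log k, 0),
# where k is SGNS's number of negative samples (-inf cells clip to 0)
k = 5
sppmi = np.maximum(pmi - np.log(k), 0.0)

# Dense low-dimensional vectors via truncated SVD of the SPPMI matrix
U, S, Vt = np.linalg.svd(sppmi)
d = 2
word_vectors = U[:, :d] * np.sqrt(S[:d])  # symmetric split of singular values
print(word_vectors.shape)
```

The symmetric `sqrt(S)` weighting is one common choice when splitting the SVD factors into word and context matrices; the sparse SPPMI matrix itself can also be used directly as the word representation.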
Similar Resources
Word Embedding Revisited: A New Representation Learning and Explicit Matrix Factorization Perspective
Recently, significant advances have been witnessed in the area of distributed word representations based on neural networks, also known as word embeddings. Among the new word embedding models, skip-gram negative sampling (SGNS) in the word2vec toolbox has attracted much attention due to its simplicity and effectiveness. However, the principles of SGNS remain poorly understood, except...
A New Document Embedding Method for News Classification
Abstract- Text classification is one of the main tasks of natural language processing (NLP). In this task, documents are classified into pre-defined categories. There is lots of news spreading on the web. A text classifier can categorize news automatically and this facilitates and accelerates access to the news. The first step in text classification is to represent documents in a suitable way t...
Word Embeddings via Tensor Factorization
Most popular word embedding techniques involve implicit or explicit factorization of a word co-occurrence based matrix into low rank factors. In this paper, we aim to generalize this trend by using numerical methods to factor higher-order word co-occurrence based arrays, or tensors. We present four word embeddings using tensor factorization and analyze their advantages and disadvantages. One of...
A Probabilistic Model for Collaborative Filtering with Implicit and Explicit Feedback Data
Collaborative filtering (CF) is one of the most efficient ways for recommender systems. Typically, CF-based algorithms analyze users’ preferences and items’ attributes using one of two types of feedback: explicit feedback (e.g., ratings given to items by users, like/dislike) or implicit feedback (e.g., clicks, views, purchases). Explicit feedback is reliable but is extremely sparse; whereas implicit ...
Online Learning of Interpretable Word Embeddings
Word embeddings encode semantic meanings of words into low-dimension word vectors. In most word embeddings, one cannot interpret the meanings of specific dimensions of those word vectors. Nonnegative matrix factorization (NMF) has been proposed to learn interpretable word embeddings via non-negative constraints. However, NMF methods suffer from scale and memory issue because they have to mainta...
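The non-negative factorization idea in this teaser can be sketched with the classic Lee–Seung multiplicative update rules, which keep both factors non-negative by construction. This is a minimal numpy illustration on a randomly generated matrix standing in for a word-context count matrix, not the method of the cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-negative word-context matrix (rows: words, cols: contexts)
V = rng.random((6, 8)) * 5.0

d = 3  # embedding dimension
W = rng.random((6, d)) + 0.1  # word factor
H = rng.random((d, 8)) + 0.1  # context factor

# Lee-Seung multiplicative updates: factors stay non-negative because
# every update multiplies by a ratio of non-negative terms
eps = 1e-9
for _ in range(300):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

# Rows of W are non-negative word embeddings; large entries in a
# dimension can be read off as membership in an interpretable "topic"
err = np.linalg.norm(V - W @ H)
```

The memory issue the teaser mentions arises because batch NMF must hold the full matrix `V`; online variants update `W` and `H` from streaming co-occurrence statistics instead.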